In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image is more informative than any of the input images.
In remote sensing applications, the increasing availability of spaceborne sensors motivates the development of different image fusion algorithms. Several situations in image processing require both high spatial and high spectral resolution in a single image, yet most of the available equipment cannot provide such data on its own. Image fusion techniques allow the integration of different information sources, so the fused image can combine complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data while merging.
In satellite imaging, two types of images are available: the panchromatic image is transmitted at the maximum resolution available, while the multispectral data are transmitted at a coarser resolution, usually two to four times lower. At the receiving station, the panchromatic image is merged with the multispectral data to convey more information.
Many methods exist to perform image fusion. The most basic is the high-pass filtering technique. Later techniques are based on the discrete wavelet transform (DWT), uniform rational filter banks, and the Laplacian pyramid.
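As an illustration of the high-pass filtering technique, the following is a minimal sketch in Python, not a definitive implementation. The arrays `pan` (a single-band panchromatic image) and `ms` (a lower-resolution multispectral band) are hypothetical inputs, the PAN size is assumed to be an integer multiple of the MS size, and the Gaussian smoothing scale is an illustrative choice.

```python
# A minimal sketch of high-pass filter (HPF) fusion. `pan` and `ms` are
# hypothetical NumPy arrays; the PAN size is assumed to be an integer
# multiple of the MS size, and `sigma` is an illustrative choice.
import numpy as np
from scipy import ndimage

def hpf_fuse(pan: np.ndarray, ms: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Inject high-frequency detail from `pan` into an upsampled `ms` band."""
    pan = pan.astype(float)
    # Upsample the multispectral band to the panchromatic grid (bilinear).
    zoom = (pan.shape[0] / ms.shape[0], pan.shape[1] / ms.shape[1])
    ms_up = ndimage.zoom(ms.astype(float), zoom, order=1)
    # High-pass detail = original minus its low-pass (Gaussian-smoothed) version.
    detail = pan - ndimage.gaussian_filter(pan, sigma)
    # Add the panchromatic spatial detail to the upsampled spectral band.
    return ms_up + detail
```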
Multisensor data fusion has become a discipline from which increasingly general formal solutions to a number of application cases are demanded. Several situations in image processing simultaneously require high spatial and high spectral information in a single image. This is important in remote sensing. However, the instruments are not capable of providing such information, either by design or because of observational constraints. One possible solution for this is data fusion.
Image fusion methods can be broadly classified into two classes: spatial domain fusion and transform domain fusion.
Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering technique, in which the high-frequency details of the panchromatic image are injected into an upsampled version of the multispectral (MS) images. A disadvantage of spatial domain approaches is that they can introduce spectral distortion into the fused image, which becomes a negative factor in further processing, such as classification. Spectral distortion is handled much better by transform domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analysing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods show better performance in the spatial and spectral quality of the fused image compared to other spatial methods of fusion.
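The following is a minimal sketch of single-level DWT fusion using PyWavelets, assuming two co-registered, same-size grayscale arrays `a` and `b` (hypothetical inputs); the Haar wavelet and the maximum-absolute-value detail rule are illustrative choices, not a prescribed standard.

```python
# A minimal sketch of single-level discrete wavelet transform (DWT) fusion.
# Assumes two co-registered grayscale arrays of equal (even) size.
import numpy as np
import pywt

def dwt_fuse(a: np.ndarray, b: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Decompose each image into approximation and detail subbands.
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(a.astype(float), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(b.astype(float), wavelet)
    # Fusion rule: average the approximations, keep the stronger detail coefficient.
    fused_cA = (cA_a + cA_b) / 2
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = (fused_cA, (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    # Reconstruct the fused image with the inverse transform.
    return pywt.idwt2(fused, wavelet)
```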
Well-known image fusion methods include high-pass filtering, IHS-based and PCA-based component substitution, wavelet transform fusion, and Laplacian pyramid fusion. Whatever the method, the images used in image fusion should already be registered; misregistration is a major source of error in image fusion.
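Since correct registration is a precondition for fusion, the following is a minimal sketch of estimating and undoing a purely translational misregistration using scikit-image's phase correlation; the array names `reference` and `moving` are hypothetical, and only pure pixel shifts are handled here.

```python
# A minimal sketch of correcting a translational misregistration before fusion.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def align(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    # Estimate the (row, col) shift needed to register `moving` to `reference`.
    shift, error, _ = phase_cross_correlation(reference, moving)
    # Apply the estimated shift so the two images can be fused pixel-to-pixel.
    return ndimage.shift(moving, shift)
```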
Several methods exist for merging satellite images. In satellite imagery, two types of images are available: panchromatic and multispectral images.
The SPOT PAN satellite provides high-resolution (10 m pixel) panchromatic data, while the LANDSAT TM satellite provides low-resolution (30 m pixel) multispectral images. Image fusion attempts to merge these images to produce a single high-resolution multispectral image.
The standard merging methods of image fusion are based on the Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite image fusion are as follows:
1. Resample the low-resolution multispectral images to the same size as the panchromatic image.
2. Transform the R, G and B bands of the multispectral image into IHS components.
3. Match the histogram of the panchromatic image to that of the intensity component.
4. Replace the intensity component with the matched panchromatic image.
5. Perform the inverse transformation to obtain a high-resolution multispectral image.
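The steps above can be sketched as follows, assuming a hypothetical low-resolution RGB multispectral array `ms` with values in [0, 1] and a panchromatic array `pan`; scikit-image's HSV transform is used here as a common stand-in for a true IHS transform.

```python
# A minimal sketch of IHS-style pan-sharpening; HSV stands in for IHS.
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.exposure import match_histograms
from skimage.transform import resize

def ihs_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    # 1. Resample the multispectral image to the panchromatic grid.
    ms_up = resize(ms, (*pan.shape, 3))
    # 2. Transform RGB into (approximate) intensity-hue-saturation components.
    hsv = rgb2hsv(ms_up)
    # 3. Match the panchromatic histogram to the intensity component.
    pan_matched = match_histograms(pan, hsv[..., 2])
    # 4. Replace the intensity component with the matched panchromatic image.
    hsv[..., 2] = np.clip(pan_matched, 0, 1)
    # 5. Inverse transform back to RGB at full resolution.
    return hsv2rgb(hsv)
```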
Image fusion has become a common term within medical diagnostics and treatment. The term is used when multiple patient images are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality [1], or by combining information from multiple modalities [2], such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT). In radiology and radiation oncology, these images serve different purposes. For example, CT images are used more often to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors.
For accurate diagnoses, radiologists must integrate information from multiple image formats. Fused, anatomically consistent images are especially beneficial in diagnosing and treating cancer. Companies such as Nicesoft, Velocity Medical Solutions, Mirada Medical, Keosys, MIMvista, IKOE, and BrainLAB have recently created image fusion software both for improved diagnostic reading and for use in conjunction with radiation treatment planning systems. With the advent of these new technologies, radiation oncologists can take full advantage of intensity-modulated radiation therapy (IMRT): being able to overlay diagnostic images onto radiation planning images results in more accurate IMRT target tumor volumes.